About epistemology
The changing concepts
Experiments and observations
The potential for errors!
Models and Estimations
Two views may be equally valid!
T. C. Chamberlin
In 1890, and again in 1897, Thomas Chrowder Chamberlin wrote “The method of multiple working hypotheses”, in which he advocated the importance of simultaneously evaluating several hypotheses.
Karl Popper
Karl Raimund Popper argued instead that hypotheses are tested deductively by what he called the “falsifiability criterion”: a hypothesis can never be proven, only falsified.
John R. Platt
John Rader Platt is noted for his pioneering work on strong inference in the 1960s and his analysis of social science in the 1970s.
Experiments are the hallmark of strong inference because they isolate experimental units, manipulate treatments, and incorporate randomization, replication, and controls.
Observations take place when a pattern or process is recorded and is often parsed into a measured outcome (or response) and measured input(s).
Process error concerns the errors that arise from imperfections in our understanding of the system we are trying to model.
Observation error concerns the errors that result from imperfections in how we measure and record the systems and relationships we seek to describe.
Models are the machinery: a model can be thought of as a description of the system, process, or relationship you are trying to evaluate.
Mathematical models vs. Statistical models
Estimation is what makes the model work: it is the context in which the parameters are estimated.
Estimation has philosophical underpinnings because it shapes the inferences we draw about the data and the system.
A simple description of a statistical model:
response = deterministic + stochastic
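This decomposition can be sketched in R. Everything below is illustrative: the linear signal, the noise level, and the simulated data are assumptions, not the example dataset.

```r
# A statistical model splits the response into a deterministic part
# (the signal we posit) and a stochastic part (random error).
set.seed(7)
x <- runif(100, 0, 10)            # hypothetical predictor
deterministic <- 2 + 0.5 * x      # assumed linear signal
stochastic <- rnorm(100, 0, 1)    # random error term
response <- deterministic + stochastic

# Estimation recovers the deterministic part from noisy data.
fit <- lm(response ~ x)
coef(fit)                         # estimates near the true 2 and 0.5
```

Fitting `lm()` to the simulated data illustrates how estimation separates the deterministic signal from the stochastic noise.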
Underfitted or overfitted
Two hypothetical outcomes from a simple Monte Carlo test.
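A minimal Monte Carlo test might look like the sketch below; the null mean, sample size, and data are all hypothetical.

```r
# Simulate the sampling distribution of the mean under a null hypothesis
# (true mean = 5) and compare the observed mean against it.
set.seed(11)
observed <- rnorm(30, mean = 5.3)                        # hypothetical sample
null_means <- replicate(1000, mean(rnorm(30, mean = 5))) # null distribution

# Two-sided p-value: how often is a simulated null mean at least as
# far from 5 as the observed mean is?
p_value <- mean(abs(null_means - 5) >= abs(mean(observed) - 5))
p_value
```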
Frequentist: “What is the probability of the data I observed, given fixed and unknowable parameter(s)?”
Bayesian: “What is the probability of certain parameter values, given the data I observed?”
The frequentist asks, “The world is fixed and unchanging; therefore, given a certain parameter value, how likely am I to observe the data that support that parameter value?”
The Bayesian asks, “The only thing I can know about the changing world is what I observe; therefore, based on my data, what are the most likely parameter values I could infer?”
The infer package is part of the tidymodels metapackage. infer is centered around 4 main verbs:
specify() allows you to specify the variable, or relationship between variables, that you’re interested in.
hypothesize() allows you to declare the null hypothesis.
generate() allows you to generate data reflecting the null hypothesis.
calculate() allows you to calculate a distribution of statistics from the generated data to form the null distribution.
Illustrative example: A researcher conducted a population-based study in XYZ place and found that the mean birth weight of infants is 2800 g.
Looking at the birth weight dataset, the researcher is curious as to whether the mean birth weight in this data set is similar to XYZ population.
In other words, the null hypothesis is \(H_0: \mu_{bwt} = 2800\ g\)
The test statistic is calculated:
[1] 2944.587
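The test statistic here is simply the sample mean. With a hypothetical stand-in for the birth weight data (the real dataset is not shown), it can be computed as:

```r
# Hypothetical stand-in for the birth weight column, in grams;
# the observed test statistic is the sample mean.
set.seed(123)
bwt <- rnorm(189, mean = 2945, sd = 700)  # simulated, not the real data
test_statistic <- mean(bwt)
test_statistic
```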
A simulated null distribution is generated, assuming the average birth weight in the population is 2800 g.
Remember, the test statistic is 2944.59 g.
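Putting the four infer verbs together, the workflow might be sketched as below. The data frame birth_weights and its bwt column are hypothetical stand-ins for the actual dataset.

```r
library(infer)

# Hypothetical stand-in data; in practice, use the real birth weight dataset.
set.seed(42)
birth_weights <- data.frame(bwt = rnorm(189, mean = 2945, sd = 700))

# Observed statistic: the sample mean birth weight.
obs_stat <- birth_weights |>
  specify(response = bwt) |>
  calculate(stat = "mean")

# Null distribution: resample assuming the true mean is 2800 g.
null_dist <- birth_weights |>
  specify(response = bwt) |>
  hypothesize(null = "point", mu = 2800) |>
  generate(reps = 1000, type = "bootstrap") |>
  calculate(stat = "mean")

# Compare the observed statistic to the simulated null distribution.
get_p_value(null_dist, obs_stat = obs_stat, direction = "two-sided")
```

The `type = "bootstrap"` resampling shifts the data to be consistent with the point null (mu = 2800) before resampling, so the resulting means form the null distribution the observed statistic is compared against.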
Thank You

RIntro: Cohort 7